Intrusion detection method for wireless sensor network based on bidirectional circulation generative adversarial network
LIU Yongmin, YANG Yujin, LUO Haoyi, HUANG Hao, XIE Tieqiang
Journal of Computer Applications    2023, 43 (1): 160-168.   DOI: 10.11772/j.issn.1001-9081.2021112001
Aiming at the problems of low detection accuracy and poor generalization ability of Wireless Sensor Network (WSN) intrusion detection methods on imbalanced datasets with discrete high-dimensional features, an intrusion detection method for WSN based on Bidirectional Circulation Generative Adversarial Network, namely BiCirGAN, was proposed. Firstly, Adversarially Learned Anomaly Detection (ALAD) was introduced to improve the understandability of the high-dimensional, discrete original features by representing them reasonably through the latent space. Secondly, the bidirectional circulation adversarial structure was adopted to ensure the consistency of bidirectional circulation in both the real space and the latent space, thereby stabilizing Generative Adversarial Network (GAN) training and improving the performance of anomaly detection. At the same time, Wasserstein distance and spectral normalization were introduced to improve the objective function of GAN, further alleviating the mode collapse of GAN and the lack of diversity in the generator. Finally, because the statistical properties of intrusion attack data change over time in unpredictable ways, a fully connected layer network with Dropout was established to optimize the anomaly detection results. Experimental results on the KDD99, UNSW-NB15 and WSN_DS datasets show that, compared to Anomaly detection with GAN (AnoGAN), Bidirectional GAN (BiGAN), Multivariate Anomaly Detection with GAN (MAD-GAN) and ALAD methods, BiCirGAN achieves a 3.9% to 33.0% improvement in detection accuracy, and an average inference speed 4.67 times faster than that of the ALAD method.
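One of the stabilizers the abstract mentions, spectral normalization, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it estimates the largest singular value of a small weight matrix by power iteration and divides the weights by it; the values of `W` are made up.

```python
import math

def spectral_norm(W, iters=50):
    """Estimate the largest singular value of matrix W (list of rows)
    by power iteration on W^T W, as used in spectral normalization."""
    n = len(W[0])
    v = [1.0 / math.sqrt(n)] * n  # start from a unit vector
    for _ in range(iters):
        u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(len(W))]  # u = W v
        v = [sum(W[i][j] * u[i] for i in range(len(W))) for j in range(n)]  # v = W^T u
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(len(W))]
    return math.sqrt(sum(x * x for x in u))  # sigma_max ~ ||W v||

W = [[2.0, 0.0], [0.0, 1.0]]
sigma = spectral_norm(W)
W_sn = [[w / sigma for w in row] for row in W]  # normalized weight W / sigma(W)
```

After normalization the layer's spectral norm is 1, which bounds its Lipschitz constant and is what keeps the critic's gradients well behaved.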
POI recommendation algorithm combining spatiotemporal information and POI importance
LI Hanlu, XIE Qing, TANG Lingli, LIU Yongjian
Journal of Computer Applications    2020, 40 (9): 2600-2605.   DOI: 10.11772/j.issn.1001-9081.2020010060
Aiming at the data-noise filtering problem and the differing importance of POIs (Points-Of-Interest) in POI recommendation research, a POI recommendation algorithm named RecSI (Recommendation by Spatiotemporal information and POI Importance) was proposed. First, geographic information and the mutual attraction between POIs were used to filter out data noise, so as to narrow the candidate set. Second, the user's preference score was calculated by combining the user's preference for each POI category at different periods of the day with the popularities of the POIs. Then, the importance of each POI was calculated by combining social information with a weighted PageRank algorithm. Finally, the user's preference score and POI importance were linearly combined to recommend the top-K POIs to the user. Experimental results on a real Foursquare check-in dataset show that the precision and recall of the RecSI algorithm are higher than those of the baseline GCSR (Geography-Category-Social sentiment fusion Recommendation) algorithm by 12.5% and 6% respectively, which verifies the effectiveness of the RecSI algorithm.
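The weighted-PageRank importance and its linear fusion with the preference score can be sketched as below. The toy POI graph, the preference scores and the fusion weight `alpha` are all invented for illustration and are not from the paper.

```python
def weighted_pagerank(edges, nodes, d=0.85, iters=100):
    """Weighted PageRank: edges maps (u, v) -> weight (e.g. check-in
    transitions between POIs); returns an importance score per POI."""
    out_w = {u: 0.0 for u in nodes}
    for (u, v), w in edges.items():
        out_w[u] += w
    pr = {u: 1.0 / len(nodes) for u in nodes}
    for _ in range(iters):
        nxt = {u: (1 - d) / len(nodes) for u in nodes}
        for (u, v), w in edges.items():
            nxt[v] += d * pr[u] * w / out_w[u]  # distribute rank along weighted edges
        pr = nxt
    return pr

# Toy POI graph: check-in flows between three POIs (hypothetical data).
nodes = ["cafe", "museum", "park"]
edges = {("cafe", "museum"): 3.0, ("museum", "park"): 1.0,
         ("park", "cafe"): 2.0, ("cafe", "park"): 1.0}
importance = weighted_pagerank(edges, nodes)
preference = {"cafe": 0.2, "museum": 0.9, "park": 0.5}  # user preference scores
alpha = 0.6                                             # fusion weight (assumed)
score = {p: alpha * preference[p] + (1 - alpha) * importance[p] for p in nodes}
top_k = sorted(nodes, key=score.get, reverse=True)[:2]  # top-K recommendation
```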
Yin-Yang-pair optimization algorithm based on chaos search and intricate operator
XU Qiuyan, MA Liang, LIU Yong
Journal of Computer Applications    2020, 40 (8): 2305-2312.   DOI: 10.11772/j.issn.1001-9081.2020010089
To solve the premature convergence problem of the basic Yin-Yang-Pair Optimization (YYPO) algorithm, chaos search was introduced into the algorithm to explore more areas based on the ergodicity of chaos, so as to improve the global exploration capability. Besides, based on the intricate operator of I Ching, opposition-based learning was adopted to search for the solutions opposite to the current ones in order to improve the local exploitation ability. Parallel programming was also added to the algorithm to make full use of computing resources such as multi-core processors. Benchmark functions were used in numerical experiments to test the performance of the improved YYPO algorithm combining chaos search and the intricate operator, namely CSIOYYPO. Experimental results show that, compared with YYPO algorithms, including the basic YYPO algorithm and the adaptive YYPO algorithm, as well as other intelligent optimization algorithms, the CSIOYYPO algorithm has higher calculation accuracy and faster convergence.
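The chaos-search idea rests on the ergodicity of a chaotic map; a common choice (assumed here, the paper does not say which map it uses) is the fully chaotic logistic map. The bounds and starting point below are arbitrary.

```python
def chaos_candidates(x0=0.61, n=5, lo=-5.0, hi=5.0):
    """Generate candidate points with the logistic map x <- 4x(1-x),
    whose orbit densely covers (0, 1), then scale them into [lo, hi]."""
    x, pts = x0, []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)  # fully chaotic logistic map (mu = 4)
        pts.append(lo + (hi - lo) * x)
    return pts

cands = chaos_candidates()  # candidate solutions spread over the search interval
```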
Adaptive most valuable player algorithm considering multiple training methods
WANG Ning, LIU Yong
Journal of Computer Applications    2020, 40 (6): 1722-1730.   DOI: 10.11772/j.issn.1001-9081.2019101815
The Most Valuable Player Algorithm (MVPA) is a new intelligent optimization algorithm that simulates sports competitions, but it suffers from low precision and slow convergence. An adaptive most valuable player algorithm considering multiple training methods (ACMTM-MVPA) was proposed to solve these problems. MVPA has a single initialization method, which is random and blind, reducing the convergence speed and accuracy of the algorithm. In order to enhance the level of the initial players and improve the overall strength of the initial teams, a training phase was added before the competition phase of MVPA, in which the neighborhood search algorithm, chaotic sequences and opposition-based learning were used to train and screen players. In order to enhance each player's ability to self-explore and to learn from the best player, so that the player can compete for the most valuable player trophy, an adaptive player evolution factor was added to the team competition phase. Experimental results on 15 benchmark functions show that the proposed algorithm outperforms MVPA, the Particle Swarm Optimization (PSO) algorithm and the Genetic Algorithm (GA) in optimization accuracy and convergence speed. Finally, an application example of ACMTM-MVPA in parameter optimization of the storm intensity formula was given; the results show that the proposed algorithm is superior to the accelerated genetic algorithm, the traditional regression method and the preferred regression method.
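The opposition-based learning used to screen initial players can be sketched as follows: for a point x in [lo, hi], its opposite is lo + hi - x, and the fitter of the pair is kept. The bounds, the sample player and the sphere objective below are illustrative only.

```python
def opposite(x, lo, hi):
    """Opposition-based learning: the opposite point of x in box [lo, hi]."""
    return [l + h - xi for xi, l, h in zip(x, lo, hi)]

def sphere(x):
    """Toy minimization objective used to screen players."""
    return sum(xi * xi for xi in x)

lo, hi = [-5.0, 0.0], [5.0, 10.0]
player = [4.0, 8.0]
opp = opposite(player, lo, hi)          # -> [-4.0, 2.0]
best = min([player, opp], key=sphere)   # keep the fitter of the pair
```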
Microscopic 3D reconstruction method based on improved iterative shrinkage thresholding algorithm
WU Qiuyu, ZHANG Mingxin, LIU Yongjun, ZHENG Jinlong
Journal of Computer Applications    2018, 38 (8): 2398-2404.   DOI: 10.11772/j.issn.1001-9081.2018010271
The Iterative Shrinkage Thresholding Algorithm (ISTA) often uses a fixed iteration step to solve the dynamic optimization problem of depth from defocus, which leads to poor convergence efficiency and low accuracy of the reconstructed microscopic 3D shape. A method based on gradient estimation with an acceleration operator and secant line search, called Fast Linear Iterative Shrinkage Thresholding Algorithm (FL-ISTA), was proposed to optimize ISTA. Firstly, the acceleration operator, a linear combination of the current and previous points, was introduced to re-estimate the gradient and update the iteration point in each iteration. Secondly, to remove the restriction of the fixed iteration step, secant line search was used to determine the optimal iteration step dynamically. Finally, the improved algorithm was applied to the dynamic optimization problem of depth from defocus, which accelerated convergence and improved the accuracy of the reconstructed microscopic 3D shape. Experimental results on reconstructing a standard 500 nm grid show that, compared with ISTA, FISTA (Fast ISTA) and MFISTA (Monotone FISTA), the efficiency of FL-ISTA was improved and the depth from defocus decreased by 10 percentage points, closer to the scale of the standard 500 nm grid. Compared with ISTA, the Mean Square Error (MSE) and average error of the microscopic 3D shape reconstructed by FL-ISTA decreased by 18 and 40 percentage points respectively. The experimental results indicate that FL-ISTA can effectively improve the convergence rate in solving the dynamic optimization problem of depth from defocus and elevate the accuracy of the reconstructed microscopic 3D shape.
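For reference, baseline ISTA with a fixed step looks like the sketch below; FL-ISTA replaces the fixed step with a secant line search and adds an acceleration operator, both of which this sketch deliberately omits. The tiny problem instance is made up.

```python
def soft_threshold(x, t):
    """Shrinkage operator prox of t*||.||_1: the core of each ISTA step."""
    return max(x - t, 0.0) if x > 0 else min(x + t, 0.0)

def ista(A, b, lam=0.1, step=0.1, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with fixed-step ISTA."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - b, gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(len(A))]
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        # gradient step followed by shrinkage
        x = [soft_threshold(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

A = [[1.0, 0.0], [0.0, 1.0]]
b = [1.0, 0.05]
x = ista(A, b)  # the small second coefficient is shrunk to exactly zero
```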
A new compressed vertex chain code
WEI Wei, DUAN Xiaodong, LIU Yongkui, GUO Chen
Journal of Computer Applications    2017, 37 (6): 1747-1752.   DOI: 10.11772/j.issn.1001-9081.2017.06.1747
Chain code is a coding technology that can represent lines, curves and region boundaries with small data storage. In order to improve the compression efficiency of chain codes, a new compressed vertex chain code named Improved Orthogonal 3-Direction Vertex Chain Code (IO3DVCC) was proposed. It combines the statistical characteristics of the Vertex Chain Code (VCC) with the directional characteristics of the Orthogonal 3-direction chain code (3OT), and defines five code values in total. The combinations 1,3 and 3,1 in VCC were merged and expressed by code 1. Code 2 has the same expression as the corresponding code value of VCC. Code 3 has the same expression as code value 2 of 3OT. Code 4 and code 5 correspond to two continuous code values 1 of IO3DVCC and eight continuous code values 2 of VCC respectively. Based on Huffman coding, the new chain code is a variable-length code. The code value probability, average expression ability, average length and efficiency of IO3DVCC, Enhanced Relative 8-Direction Freeman Chain Code (ERD8FCC), Arithmetic-encoded Variable-length Relative 4-direction Freeman chain code (AVRF4), Arithmetic coding applied to the 3OT chain code (Arith_3OT), Compressed VCC (CVCC) and Improved CVCC (ICVCC) were calculated on the contour boundaries of 100 images. The experimental results show that the efficiency of IO3DVCC is the highest. The total code number, total binary bit number, and compression ratio relative to the 8-Direction Freeman Chain Code (8DFCC) of the three chain codes IO3DVCC, Arith_3OT and ICVCC were calculated on the contour boundaries of 20 randomly selected images. The experimental results demonstrate that the compression effect of IO3DVCC is the best.
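The Huffman step that makes IO3DVCC a variable-length code can be sketched as follows: frequent code values get shorter bit patterns. The toy chain-code stream is invented.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman table mapping each code value to a bit string."""
    freq = Counter(symbols)
    heap = [[f, [s, ""]] for s, f in sorted(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]  # left branch
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]  # right branch
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {s: code for s, code in heap[0][1:]}

def huffman_decode(bits, codes):
    """Walk the bit string, emitting a symbol at each full codeword."""
    inv = {code: s for s, code in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return out

chain = [1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 5]  # toy stream of five code values
codes = huffman_codes(chain)
encoded = "".join(codes[s] for s in chain)
```

The code is prefix-free by construction, so decoding needs no separators.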
Word semantic similarity computation based on integrating HowNet and search engines
ZHANG Shuowang, OUYANG Chunping, YANG Xiaohua, LIU Yongbin, LIU Zhiming
Journal of Computer Applications    2017, 37 (4): 1056-1060.   DOI: 10.11772/j.issn.1001-9081.2017.04.1056
To address the mismatch between the word semantic descriptions of "HowNet" and the subjective cognition of vocabulary, while making full use of rich network knowledge, a word semantic similarity calculation method combining "HowNet" and search engines was proposed. Firstly, considering the inclusion relation between words and word sememes, preliminary semantic similarity results were obtained by an improved concept similarity calculation method. Then further semantic similarity results were obtained by a double correlation detection algorithm and a pointwise mutual information method based on search engines. Finally, a fitting function was designed, its weights were calculated by batch gradient descent, and the similarity results of the first two steps were fused. The experimental results show that, compared with methods based solely on "HowNet" or on search engines, the Spearman and Pearson coefficients of the fusion method are both improved by 5%; meanwhile, the match between the semantic description of a specific word and the subjective cognition of vocabulary is improved. This proves that integrating network knowledge into concept similarity calculation is effective for computing Chinese word semantic similarity.
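The search-engine part relies on pointwise mutual information over page counts; a minimal sketch with made-up hit counts:

```python
import math

def pmi(cooc, occ_x, occ_y, total):
    """Pointwise mutual information from search-engine hit counts:
    log( p(x, y) / (p(x) * p(y)) ), with probabilities estimated as
    hit counts over the total number of indexed pages."""
    return math.log((cooc / total) / ((occ_x / total) * (occ_y / total)))

# Hypothetical hit counts for two words and their co-occurrence query.
score = pmi(cooc=2000, occ_x=10000, occ_y=8000, total=1_000_000)
```

A positive score means the two words co-occur more often than independence would predict.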
Fault detection filter design based on genetic algorithm in wireless sensor and actuator network
LIU Yong, SHEN Xuanfan, LIAO Yong, ZHAO Ming
Journal of Computer Applications    2016, 36 (3): 616-619.   DOI: 10.11772/j.issn.1001-9081.2016.03.616
To improve the reliability of the Wireless Sensor and Actuator Network (WSAN), an optimal design method based on the Genetic Algorithm (GA) for a WSAN fault detection filter was proposed. In system modeling, the influence of the wireless network transmission delay on the networked control system was modeled as external noise, a composite optimization index composed of sensitivity and robustness was made the design goal of the fault detection filter, and this optimization objective served as the core of the GA, namely its fitness function. At the same time, according to the numerical characteristics of the optimization objective in WSAN, real coding, uniform mutation, arithmetic crossover and other operators were selected to speed up convergence while taking the accuracy of the results into account. The optimized filter not only restrains the noise signal but also amplifies the fault signal. Finally, the effectiveness of the proposed design was demonstrated by MATLAB/OMNeT++ hybrid simulations.
Relational algebraic operation algorithm on compressed data
DING Xinzhe, ZHANG Zhaogong, LI Jianzhong, TAN Long, LIU Yong
Journal of Computer Applications    2016, 36 (1): 21-26.   DOI: 10.11772/j.issn.1001-9081.2016.01.0021
Since compressed data can support certain operations without being decompressed first in massive data management, a new compression algorithm oriented to column storage, called CCA (Column Compression Algorithm), was proposed under the condition of normal distribution and according to the features of column data storage. Firstly, the data were classified by length; secondly, a sampling method was used to obtain more repetitive prefixes; finally, dictionary coding was utilized for compression, with the Column Index (CI) and Column Reality (CR) serving as the data compression structures to reduce the storage requirement of massive data, so that basic relational algebraic operations such as select, project and join were supported directly and effectively. A prototype database system based on CCA, called D-DBMS (Ding-Database Management System), was implemented. The theoretical analyses and the experimental results on 1 TB of data show that the proposed compression algorithm can significantly improve massive data storage efficiency and data manipulation performance. Compared to the BAP (Bit Address Physical) and TIDC (TupleID Center) methods, the compression ratio of CCA was improved by 51% and 14%, and its running speed by 47% and 42%, respectively.
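Dictionary coding with an index/reality split can be sketched as follows. This is a toy illustration of the general idea, not the CCA algorithm itself (the CI/CR names are borrowed from the abstract); `select_eq` shows why an equality select can run on the compressed codes without decompressing any row.

```python
def dict_encode(column):
    """Dictionary-encode a column: CR holds the distinct values,
    CI holds one small integer code per row."""
    cr = sorted(set(column))
    index = {v: i for i, v in enumerate(cr)}
    ci = [index[v] for v in column]
    return cr, ci

def select_eq(cr, ci, value):
    """Equality select evaluated directly on codes (no decompression):
    look the constant up in CR once, then scan the integer codes."""
    try:
        code = cr.index(value)
    except ValueError:
        return []                      # constant absent from the dictionary
    return [row for row, c in enumerate(ci) if c == code]

col = ["red", "blue", "red", "green", "red"]
cr, ci = dict_encode(col)
rows = select_eq(cr, ci, "red")        # matching row ids
```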
Improved weighted centroid localization algorithm in narrow space
LIU Yong, ZHANG Jinlong, ZHANG Yanbo, WANG Tao
Journal of Computer Applications    2015, 35 (5): 1273-1275.   DOI: 10.11772/j.issn.1001-9081.2015.05.1273
Concerning problems such as the severe signal multipath effect and the low accuracy of sensor node positioning in narrow spaces, a new method using the Weighted Centroid Localization (WCL) algorithm based on the Received Signal Strength Indicator (RSSI) was proposed. The algorithm, intended for long and narrow strip spaces, dynamically acquires the path loss exponent from the RSSI and distance of neighbor beacon node signals, improving the environmental adaptability of RSSI-based distance estimation. In addition, the algorithm improves the weight coefficient of the weighted centroid algorithm by introducing an environment-based correction factor, which improves the localization accuracy. Theoretical analysis and simulation results show that the algorithm is well adapted to narrow spaces. Compared with the standard WCL algorithm, in roadway environments with widths of 3 m, 5 m, 8 m and 10 m and 10 beacon nodes, the positioning precision increases by 22.1%, 19.2%, 16.1% and 16.5% respectively, and the stability increases by 23.4%, 21.5%, 18.1% and 15.4% respectively.
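The two ingredients, a log-distance path-loss model with a tunable exponent and a distance-weighted centroid, can be sketched as follows. The reference power `a`, the exponent `n` and the beacon layout are assumed values, not the paper's parameters.

```python
import math

def rssi_to_distance(rssi, a=-45.0, n=2.5):
    """Log-distance path-loss model: RSSI = A - 10*n*log10(d), hence
    d = 10 ** ((A - RSSI) / (10 * n)). A is the RSSI at 1 m, n the
    path loss exponent (the quantity the paper estimates dynamically)."""
    return 10 ** ((a - rssi) / (10 * n))

def weighted_centroid(beacons):
    """Weighted centroid with 1/d weights: nearer beacons pull harder.
    Each beacon is (x, y, estimated distance)."""
    wsum = sum(1.0 / d for _, _, d in beacons)
    x = sum(bx / d for bx, _, d in beacons) / wsum
    y = sum(by / d for _, by, d in beacons) / wsum
    return x, y

beacons = [(0.0, 0.0, rssi_to_distance(-55.0)),
           (10.0, 0.0, rssi_to_distance(-65.0)),
           (5.0, 3.0, rssi_to_distance(-60.0))]
pos = weighted_centroid(beacons)  # estimate inside the beacon triangle
```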
Traffic balancing routing algorithm for wireless mesh networks based on Grover search
LIU Yongguang
Journal of Computer Applications    2014, 34 (7): 1956-1959.   DOI: 10.11772/j.issn.1001-9081.2014.07.1956
In applications of Wireless Mesh Networks (WMN), users access the Internet through mesh gateways. This architecture is prone to traffic imbalance between mesh routers located at different places, making some mesh routers bottlenecks and hence degrading network performance and users' Quality of Service (QoS). To solve this problem, a traffic balancing routing algorithm based on the Grover quantum search algorithm was presented. The algorithm utilizes the parallel character of quantum computation: the operation matrix is constructed according to the traffic balancing function model, and the traffic balancing paths are obtained by Grover iterations. Simulations show that the paths selected by the algorithm can balance WMN traffic effectively and maximize the minimum bandwidth obtained by each user. The execution efficiency of the algorithm is also better than that of similar algorithms.
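The Grover iteration the algorithm relies on (oracle sign flip plus inversion about the mean) can be simulated classically on a state vector; the problem size below is arbitrary. After roughly (pi/4)*sqrt(N) iterations the marked item dominates the distribution.

```python
import math

def grover(n_items, marked, iters):
    """Classical simulation of Grover search: start from the uniform
    superposition, flip the marked amplitude (oracle), then invert
    all amplitudes about their mean (diffusion). Returns the success
    probability of measuring the marked item."""
    amp = [1.0 / math.sqrt(n_items)] * n_items
    for _ in range(iters):
        amp[marked] = -amp[marked]            # oracle
        mean = sum(amp) / n_items
        amp = [2.0 * mean - a for a in amp]   # inversion about the mean
    return amp[marked] ** 2

n = 16
k = round(math.pi / 4 * math.sqrt(n))         # ~3 optimal iterations
p = grover(n, marked=3, iters=k)
```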
Double four-step route phase-shifting average algorithm
CHEN Liwei, LIU Yong, BI Guotang, JIANG Yong
Journal of Computer Applications    2014, 34 (6): 1830-1833.   DOI: 10.11772/j.issn.1001-9081.2014.06.1830
Gamma nonlinearity and random noise caused by optical devices are the two main phase errors in structured light projection. The double three-step phase-shifting algorithm has a unique advantage in inhibiting both of them, but its measuring results still suffer from two drawbacks: higher nonlinear error and lower measuring precision. A double four-step route phase-shifting average algorithm was proposed to resolve these problems; it applies the idea of phase-aligning averaging to the four-step phase-shifting algorithm to lower the effect of nonlinear error, and puts forward a phase averaging method of phase-field space transform in multi-frequency heterodyne to weaken random noise and improve measuring precision. The experimental results show that the proposed method has higher accuracy and adaptability in phase unwrapping.
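The four-step phase-shifting core that the algorithm builds on can be sketched with synthetic fringes of known parameters: four intensities shifted by pi/2 yield the wrapped phase in closed form.

```python
import math

def four_step_phase(i0, i1, i2, i3):
    """Recover the wrapped phase from four fringe intensities shifted
    by pi/2 each: I_k = A + B*cos(phi + k*pi/2), which gives
    phi = atan2(I3 - I1, I0 - I2)."""
    return math.atan2(i3 - i1, i0 - i2)

# Synthetic fringes: background A = 2, modulation B = 1, true phase 0.7 rad.
A, B, phi = 2.0, 1.0, 0.7
intensities = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = four_step_phase(*intensities)
```

Note that both the background A and the modulation B cancel in the differences, which is why the formula is insensitive to uniform illumination.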
Parking guidance system based on ZigBee and geomagnetic sensor technology
YUE Xuejun, LIU Yongxin, WANG Yefu, CHEN Shurong, LIN Da, QUAN Dongping, YAN Yingwei
Journal of Computer Applications    2014, 34 (3): 884-887.   DOI: 10.11772/j.issn.1001-9081.2014.03.0884
Concerning the phenomenon that common parking services cannot satisfy the increasing demand of private vehicle owners, an intelligent parking guidance system based on a ZigBee network and geomagnetic sensors was designed. Real-time vehicle positions and related traffic information were collected by geomagnetic sensors around parking lots and uploaded to the central server via the ZigBee network; outdoor Liquid Crystal Display (LCD) screens controlled by the central server displayed information on available parking places. The guidance strategy was divided into 4 levels, providing clear and effective information to drivers. The experimental results show that the distance detection accuracy of the geomagnetic sensors is within 0.4 m, and the lowest packet loss rate of the wireless network within a range of 150 m is 0%. This system can provide a solution for better parking services in intelligent cities.
Image resampling tampering detection based on further resampling
LIU Yi, LIU Yongben
Journal of Computer Applications    2014, 34 (3): 815-819.   DOI: 10.11772/j.issn.1001-9081.2014.03.0815
Resampling is a typical operation in image forgery. Since most existing resampling tampering detection algorithms for JPEG images are not powerful enough and cannot estimate the zoom factor accurately, an image resampling detection algorithm based on further resampling was proposed. First, a JPEG compressed image was resampled again with a scaling factor less than 1 to reduce the effects of the JPEG compression applied when the image file was saved. Then the cyclical property of the second derivative of a resampled signal was used to detect the resampling operation. The experimental results show that the proposed algorithm is robust to JPEG compression, can accurately estimate the real zoom factor, and is useful for detecting resampling when a synthesized image is composed of original images resampled with different scaling factors.
Cloud framework for hierarchical batch-factor algorithm
YUAN Xinhui, LIU Yong, QI Fengbin
Journal of Computer Applications    2014, 34 (3): 690-694.   DOI: 10.11772/j.issn.1001-9081.2014.03.0690
Bernstein's batch-factor algorithm can test the B-smoothness of many integers in a short time, but it costs so much memory that it is widely used in theoretical analyses yet rarely in practice. Based on splitting the product of primes into pieces, a hierarchical batch-factor cloud framework was proposed to solve this problem. The hierarchical design makes development clear and easy, and the framework can be easily ported to other architectures. The cloud computing framework, borrowed from MapReduce, makes use of services provided by cloud clients, such as distributed memory, shared memory and messaging, to carry out the mapping of the splitting-primes batch-factor algorithm, which overcomes the great memory cost of Bernstein's method. Experiments show that the framework has good scalability and can be adapted to batch factoring at different sizes, with the scale of the prime product varying from 1.5 GB to 192 GB, which enhances the usefulness of the algorithm significantly.
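The property being tested is easy to state: an integer is B-smooth when all of its prime factors are at most B. Below is the naive per-integer trial-division check; Bernstein's batch method answers the same question for many integers at once using product and remainder trees, which is where the memory cost comes from.

```python
def is_smooth(n, bound):
    """Check B-smoothness by trial division: divide out every factor
    d = 2, 3, 4, ... up to sqrt(n); whatever remains is either 1 or
    the largest prime factor of n."""
    d = 2
    while d * d <= n:
        while n % d == 0:
            n //= d
        d += 1
    return n <= bound  # leftover n is 1 or the largest prime factor

smooth_30 = is_smooth(2 * 3 * 5, 5)   # all factors <= 5
rough_14 = is_smooth(2 * 7, 5)        # factor 7 exceeds the bound
```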
Improved compression vertex chain code based on Huffman coding
WEI Wei, LIU Yongkui, DUAN Xiaodong, GUO Chen
Journal of Computer Applications    2014, 34 (12): 3565-3569.  
This paper reviewed the various kinds of chain codes used in image processing and pattern recognition, and proposed a new chain code named Improved Compressed Vertex Chain Code (ICVCC) based on the Compressed Vertex Chain Code (CVCC). ICVCC adds one code value compared with CVCC and adopts Huffman coding to encode each code value, yielding a set of chain codes of unequal length. The expression ability per code, average length and efficiency, as well as the compression ratio with respect to the 8-Direction Freeman Chain Code (8DFCC), were calculated through statistics over a large number of images. The experimental results show that the efficiency of the proposed ICVCC is the highest and its compression ratio is ideal.
Removal of mismatches in scale-invariant feature transform algorithm using image depth information
LIU Zheng, LIU Yongben
Journal of Computer Applications    2014, 34 (12): 3554-3559.  
Feature point matching is of central importance in feature-based image registration algorithms such as the Scale-Invariant Feature Transform (SIFT) algorithm. Since most existing feature matching algorithms are not powerful and efficient at removing mismatches, a mismatch removal algorithm was proposed that exploits the depth information in an image to improve performance. In the proposed approach, the depth map of an acquired image is produced using defocus blurring clues and a machine learning algorithm, followed by SIFT feature point extraction. Then, the correct feature correspondences and the transformation between the two feature sets are iteratively estimated using the RANdom SAmple Consensus (RANSAC) algorithm and the rule of local depth continuity. The experimental results demonstrate that the proposed algorithm outperforms conventional ones in mismatch removal.
Cross-country path planning based on improved ant colony algorithm
WU Tianyi, XU Jiheng, LIU Yongjian
Journal of Computer Applications    2013, 33 (04): 1157-1160.   DOI: 10.3724/SP.J.1087.2013.01157
For the vehicle cross-country path planning problem, the influence of terrain slope and surface attributes on path planning was researched and analyzed. With the introduction of the "window moving method" for advance judgment and trafficability analysis of terrain slope, rating indexes for the landform roughness of wheeled and crawler vehicles were established, and terrain roughness was rasterized with the "area dominant method". The constraint effects of slope and roughness were stacked in order to reduce the search scope and improve search efficiency by establishing a taboo list. The evaluation function of the improved ant colony algorithm was constructed, and, with reference to the path table, a path optimization algorithm was designed that considers the slope and roughness constraints. The simulation results show that the algorithm can effectively realize cross-country path planning in accordance with the real terrain environment.
Energy-aware dynamic application partitioning algorithm in mobile computing
NIU Ruifang, LIU Yong
Journal of Computer Applications    2012, 32 (12): 3295-3298.   DOI: 10.3724/SP.J.1087.2012.03295
Limited battery life is a big obstacle to the further growth of mobile devices, so a new dynamic application partitioning algorithm was proposed to minimize the power consumption of a mobile device by offloading part of its computation to a remote resource-rich server. An Object Relation Graph (ORG) of the application was set up and then transformed into a network. Using network flow theory, the power consumption optimization problem was transformed into the optimal bipartition problem of a flow network, which can be solved by the max-flow min-cut algorithm. The simulation results show that the proposed algorithm saves considerably more energy than existing algorithms and adapts better to environmental changes.
Humanoid robot gait planning based on 3D linear inverted pendulum model
YU Guochen, LIU Yongxin, LI Xiaohong
Journal of Computer Applications    2012, 32 (09): 2643-2647.   DOI: 10.3724/SP.J.1087.2012.02643
In order to adjust a humanoid robot's gait in real time, a humanoid robot gait generation method was proposed. The robot motion was simplified to the motion of a three-dimensional linear inverted pendulum, and the Zero Moment Point (ZMP) trajectory was pre-planned. According to the relation between the Center of Mass (CoM) and the ZMP, the CoM trajectory was obtained. The frontal and lateral gaits were simplified into seven-link and five-link structures, the triangle theorem was used to calculate each joint angle, and the ZMP equation was introduced to discuss the stability of the walking process. A system simulation was performed under given conditions and, combined with the actual system, its operation was analyzed to verify the validity of the proposed planning method.
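The CoM trajectory of the 3D linear inverted pendulum has a closed form along each horizontal axis, sketched below with illustrative numbers (CoM height, initial state and time are made up).

```python
import math

def lipm_com(x0, v0, zc, t, g=9.81):
    """CoM position and velocity of the linear inverted pendulum along
    one axis: x(t) = x0*cosh(t/Tc) + Tc*v0*sinh(t/Tc), with the time
    constant Tc = sqrt(zc / g) set by the constant CoM height zc."""
    tc = math.sqrt(zc / g)
    return (x0 * math.cosh(t / tc) + tc * v0 * math.sinh(t / tc),
            (x0 / tc) * math.sinh(t / tc) + v0 * math.cosh(t / tc))

# A fraction of one support phase with CoM height 0.8 m: the CoM starts
# slightly behind the support point and moves forward past it.
x, v = lipm_com(x0=-0.05, v0=0.4, zc=0.8, t=0.2)
```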
Threshold improvement method combining DSmT and DST
LIU Yongkuo, LING Shuanghan
Journal of Computer Applications    2012, 32 (04): 1037-1040.   DOI: 10.3724/SP.J.1087.2012.01037
Dezert-Smarandache Theory (DSmT) is a data fusion method that can successfully handle highly conflicting evidence sources and efficiently realize multi-source information fusion, while Dempster-Shafer Theory (DST) produces good results at a lower computational cost when conflicts are low. Therefore, integrating the two methods, DST is adopted when the conflict is low and DSmT otherwise, which is a feasible way to raise the efficiency of information fusion. A method of single-value switching thresholds between DSmT and DST had been proposed previously; addressing its deficiency, this article proposed taking the conflict distance function as the judgment basis, so that single-value thresholds and multi-point value thresholds can be distinguished according to different evidence combinations.
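Dempster's combination rule and its conflict mass K, which the switching threshold is built on, can be sketched as follows; the two mass functions are toy examples.

```python
def dempster(m1, m2):
    """Dempster's rule over mass functions keyed by frozenset focal
    elements: products m1(B)*m2(C) go to B & C, the mass on empty
    intersections accumulates as the conflict K, and the rest is
    renormalized by 1 - K."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc
    return {a: v / (1.0 - conflict) for a, v in combined.items()}, conflict

A, B = frozenset({"a"}), frozenset({"b"})
m1 = {A: 0.6, A | B: 0.4}
m2 = {B: 0.5, A | B: 0.5}
m, k = dempster(m1, m2)  # k is the conflict a switching rule would test
```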
Multi-objective evolutionary algorithm for grid job scheduling based on adaptive neighborhood
YANG Ming, XUE Shengjun, CHEN Liang, LIU Yongsheng
Journal of Computer Applications    2012, 32 (03): 599-602.   DOI: 10.3724/SP.J.1087.2012.00599
A new adaptive neighborhood Multi-Objective Grid Task Scheduling Algorithm (ANMO-GTSA) was proposed for the collaborative optimization problem of multi-objective job scheduling in grid computing. In the ANMO-GTSA, an adaptive neighborhood method was applied to find the non-inferior solution set and maintain the diversity of the multi-objective job scheduling population. The experimental results indicate that the proposed algorithm can not only balance multiple scheduling objectives, but also improve resource utilization and task execution efficiency. Moreover, it achieves better performance in the time and cost dimensions than the traditional Min-min and Max-min algorithms.
Design knowledge sharing platform based on functional ontology
XIONG Jing, LIU Yong, XU Jianliang
Journal of Computer Applications    2011, 31 (10): 2804-2807.   DOI: 10.3724/SP.J.1087.2011.02804
To overcome the deficiency in sharing and reusing design knowledge in manufacturing industry, a strategy based on functional ontology was proposed for design knowledge-sharing. The existing product structure can be mapped to its product functions by using functional ontology, and the design principles can be represented by functional decomposition tree. First, the basic framework of functional ontology was introduced. Then, the role of the functional decomposition tree to product design was analyzed. Finally, a knowledge-sharing platform for household appliances was designed and developed based on functional ontology. It was used to verify the proposed strategy. The experimental results show that the proposed strategy can effectively realize information retrieval, sharing and reuse of design knowledge in manufacturing industry. It can also shorten product development cycles.
Related Articles | Metrics
Method of SVM classifier generation based on fuzzy classification association rule
CUI Jian, LI Qiang, LIU Yong
Journal of Computer Applications    2011, 31 (05): 1348-1350.   DOI: 10.3724/SP.J.1087.2011.01348
Abstract1698)      PDF (650KB)(936)       Save
To increase the classification accuracy of database classification systems, a new classification method was proposed. Firstly, the continuous attributes were discretized by the Fuzzy C-Means (FCM) algorithm. Secondly, an improved fuzzy association method was proposed to mine the classification association rules. Finally, the compatibility between the generated rules and the patterns was used to construct a set of feature vectors, from which a classifier was generated. The experimental results demonstrate that the method achieves high discrimination and efficiency.
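The first step above, discretizing a continuous attribute with FCM, can be sketched in pure Python for the one-dimensional case; this is a generic illustration of the standard FCM update rules, with parameter names and defaults assumed rather than taken from the paper:

```python
# One-dimensional Fuzzy C-Means (FCM) sketch: alternately update fuzzy
# memberships and cluster centers; the memberships can then serve as the
# fuzzy intervals of a discretized attribute.

def fcm_1d(values, c=2, m=2.0, iters=50):
    """Return cluster centers and membership degrees u[i][j] of
    values[i] in cluster j (each point's memberships sum to 1)."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * (j + 0.5) / c for j in range(c)]
    u = [[0.0] * c for _ in values]
    for _ in range(iters):
        # membership update from current centers
        for i, x in enumerate(values):
            d = [abs(x - ck) or 1e-12 for ck in centers]  # guard zero distance
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2 / (m - 1)) for dk in d)
        # center update from fuzzy memberships
        for j in range(c):
            w = [u[i][j] ** m for i in range(len(values))]
            centers[j] = sum(wi * x for wi, x in zip(w, values)) / sum(w)
    return centers, u

centers, u = fcm_1d([1.0, 1.2, 0.9, 8.0, 8.3, 7.9], c=2)
```

Each value's membership vector (one degree per cluster) is what the rule-mining step can consume in place of a crisp interval label.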
Related Articles | Metrics
Sensitive information transmission scheme based on magic cube algorithm in automated trust negotiation
Jian-li LI, Guang-lei HUO, Bo LIU, Yong GAO
Journal of Computer Applications    2011, 31 (04): 984-988.   DOI: 10.3724/SP.J.1087.2011.00984
Abstract1258)      PDF (816KB)(441)       Save
To solve the problem of transmitting credentials and other resources through unsafe physical channels during Automated Trust Negotiation (ATN), a transmission scheme for credentials and resources was proposed based on the magic cube algorithm. Through the magic cube algorithm, a transformation sequence was formed from the request or resource of the negotiation initiator, followed by a digital digest to generate the information transformation sequence. According to the logical expression composed of the credentials that represent the condition of negotiation success, the information transformation sequence was shuffled into an information transmission sequence, which was sent to the negotiation receiver. The receiver reciprocally transformed the information transmission sequence according to his own credentials. The scheme features one-round credential exchange and low network cost. The example shows that the scheme is feasible, and the experimental results show that it offers good security and efficiency with a low volume of transmitted information.
Related Articles | Metrics
Study on fast packet filter under network processor IXP2400
ZHONG Ting, LIU Yong, GENG Ji
Journal of Computer Applications    2005, 25 (11): 2568-2570.  
Abstract1349)      PDF (617KB)(1329)       Save
The efficiency of packet filtering is vital for firewalls. An efficient packet filter solution based on the Intel IXP2400 network processor was presented. This solution optimized packet filtering through a dynamic rule table, a static rule tree and the hardware hash unit. A firewall designed in this way can reach 1000 Mb/s line speed.
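The dynamic-rule-table idea can be sketched in software as follows; this is a hypothetical illustration of the general fast-path/slow-path pattern, not the IXP2400 microcode, and the field names and rules are assumptions:

```python
# Sketch of a hash-accelerated packet filter: established flows hit an
# O(1) hash lookup on the 5-tuple (the fast path, played by the hardware
# hash unit on the IXP2400); only packets with no dynamic entry fall
# through to the ordered static rule set (the slow path).

static_rules = [
    # (predicate, action) pairs checked in order for new flows
    (lambda p: p["proto"] == "tcp" and p["dst_port"] == 80, "ACCEPT"),
    (lambda p: True, "DROP"),             # default-deny catch-all
]

dynamic_table = {}                        # 5-tuple -> cached action

def five_tuple(p):
    return (p["proto"], p["src"], p["src_port"], p["dst"], p["dst_port"])

def filter_packet(p):
    key = five_tuple(p)
    if key in dynamic_table:              # fast path: known flow
        return dynamic_table[key]
    for pred, action in static_rules:     # slow path: static rules
        if pred(p):
            if action == "ACCEPT":        # cache the accepted flow
                dynamic_table[key] = action
            return action

pkt = {"proto": "tcp", "src": "10.0.0.1", "src_port": 1234,
       "dst": "10.0.0.2", "dst_port": 80}
```

After the first packet of a flow is accepted, every later packet of that flow is resolved by a single hash lookup, which is what makes line-speed filtering feasible.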
Related Articles | Metrics
Workflow authorization model based on RBAC for CSCD system
XU Hong-xue, LIU Yong-xian
Journal of Computer Applications    2005, 25 (10): 2424-2427.  
Abstract1598)      PDF (682KB)(1062)       Save
This paper proposed a workflow authorization model based on RBAC (Role-Based Access Control) for collaborative design systems. Different from traditional access control authorization models, this model introduced the notion of temporal-spatial permission, which means that a user can perform a certain operation on a task only within a certain time interval and a certain spatial range of Internet/Intranet addresses; that is, the RBAC-based workflow authorization for a collaborative design system relates not only to time but also to Internet/Intranet address. The model ensures not only that only authorized users can execute a task, but also that the authorization flow stays synchronized with the workflow and with dynamic changes of the spatial range of Internet/Intranet addresses.
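The temporal-spatial permission notion described above can be sketched as a check that combines role, task, operation, a validity interval and an address range; the class and attribute names are assumptions for illustration, not the paper's actual model:

```python
# Sketch of a temporal-spatial permission: valid only for a given
# (role, task, operation) triple, within a time interval, and from
# a given Internet/Intranet address range.
import ipaddress
from datetime import datetime

class TemporalSpatialPermission:
    def __init__(self, role, task, op, start, end, network):
        self.role, self.task, self.op = role, task, op
        self.start, self.end = start, end
        self.network = ipaddress.ip_network(network)

    def allows(self, role, task, op, when, addr):
        # all five conditions must hold for the operation to proceed
        return (role == self.role and task == self.task and op == self.op
                and self.start <= when <= self.end
                and ipaddress.ip_address(addr) in self.network)

perm = TemporalSpatialPermission(
    "designer", "review-part-A", "edit",
    datetime(2005, 1, 1), datetime(2005, 1, 31),
    "192.168.0.0/24")
```

The same request from the right role but outside the interval, or from an address outside the subnet, is denied, which captures the "not only time but also address" constraint of the model.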
Related Articles | Metrics
Standard parts modelling platform for networked products development
LIU Yong, GAO Jian-min, CHEN Fu-min
Journal of Computer Applications    2005, 25 (06): 1417-1419.   DOI: 10.3724/SP.J.1087.2005.01417
Abstract1156)      PDF (168KB)(871)       Save
A modeling method was discussed to support the construction of networked standard parts. Moreover, a Web-based modeling environment was provided that integrates the standard information, functions and techniques of products. The programming and modeling methods and flows were analyzed in detail. Item management and workflow methods were implemented in the system to construct standard parts information for users on the Internet. Besides, a platform was constructed to manage standard projects and support product development.
Related Articles | Metrics